Learning Anisotropic RBF Kernels
Authors
Abstract
We present an approach for learning an anisotropic RBF kernel in a game-theoretic setting where the value of the game is the degree of separation between positive and negative training examples. The method extends a previously proposed method (KOMD) to perform feature re-weighting and distance metric learning in a kernel-based classification setting. Experiments on several benchmark datasets demonstrate that our method generally outperforms state-of-the-art distance metric learning methods, including the Large Margin Nearest Neighbor Classification family of methods.
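The kernel being learned here replaces the single bandwidth of the standard RBF kernel with one nonnegative weight per feature. As a minimal illustration (the weight vector `beta` is fixed by hand below; the paper learns it by playing the margin game between the two classes, which is not reproduced here), such a kernel matrix can be evaluated as:

```python
import numpy as np

def anisotropic_rbf(X, Z, beta):
    """Anisotropic RBF kernel K(x, z) = exp(-sum_j beta_j * (x_j - z_j)^2).

    X: (n, d) array, Z: (m, d) array, beta: (d,) nonnegative per-feature weights.
    The isotropic case is recovered with beta = gamma * np.ones(d).
    """
    beta = np.asarray(beta, dtype=float)
    diff = X[:, None, :] - Z[None, :, :]           # pairwise differences, (n, m, d)
    d2 = np.einsum('nmd,d->nm', diff ** 2, beta)   # weighted squared distances, (n, m)
    return np.exp(-d2)

# Illustrative usage with a hand-chosen weight vector.
X = np.random.randn(5, 3)
K = anisotropic_rbf(X, X, beta=[1.0, 0.1, 0.5])
```

Features with larger weights contribute more to the distance, so learning beta amounts to feature re-weighting inside the kernel.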
Similar Papers
Generalized RBF feature maps for Efficient Detection
These kernels combine the benefits of two other important classes of kernels: the homogeneous additive kernels (e.g. the χ2 kernel) and the RBF kernels (e.g. the exponential kernel). However, large scale problems require machine learning techniques of at most linear complexity and these are usually limited to linear kernels. Recently, Maji and Berg [2] and Vedaldi and Zisserman [4] proposed exp...
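A common concrete instance of such a generalized RBF kernel is the exponential-χ² kernel, which exponentiates the χ² distance associated with the additive kernel. A minimal sketch, assuming nonnegative (e.g. histogram) features and an illustrative bandwidth `gamma`:

```python
import numpy as np

def exp_chi2_kernel(x, z, gamma=1.0, eps=1e-12):
    """Generalized RBF kernel built on the chi-squared distance:
    K(x, z) = exp(-gamma * sum_j (x_j - z_j)^2 / (x_j + z_j)),
    for nonnegative feature vectors such as histograms."""
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    chi2 = np.sum((x - z) ** 2 / (x + z + eps))  # eps guards empty bins
    return np.exp(-gamma * chi2)
```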
Extreme Learning Machine with Randomly Assigned RBF Kernels
A new learning algorithm called extreme learning machine (ELM) has recently been proposed for single-hidden layer feedforward neural networks (SLFNs) with additive neurons to easily achieve good generalization performance at extremely fast learning speed. ELM randomly chooses the input weights and analytically determines the output weights of SLFNs. It is proved in theory that ELM can be extend...
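As a rough sketch of the ELM recipe described above, hidden-layer parameters are drawn at random and only the linear output weights are solved for analytically. The Gaussian RBF hidden nodes, their random centres and widths, and the function names are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def elm_rbf_fit(X, y, n_hidden=50, seed=0):
    """Toy ELM with Gaussian RBF hidden nodes: random centres and widths,
    output weights obtained by a single least-squares solve."""
    rng = np.random.default_rng(seed)
    centers = X[rng.integers(0, len(X), size=n_hidden)]   # random centres drawn from the data
    widths = rng.uniform(0.5, 2.0, size=n_hidden)          # random kernel widths
    H = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / widths ** 2)
    W, *_ = np.linalg.lstsq(H, y, rcond=None)               # analytic output weights
    return centers, widths, W

def elm_rbf_predict(X, centers, widths, W):
    H = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / widths ** 2)
    return H @ W
```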
A Comparison Study of Nonlinear Kernels
Compared to the linear kernel, nonlinear kernels can often substantially improve the accuracies of many machine learning algorithms. In this paper, we compare 5 different nonlinear kernels: min-max, RBF, fRBF (folded RBF), acos, and acos-χ, on a wide range of publicly available datasets. The proposed fRBF kernel performs very similarly to the RBF kernel. Both RBF and fRBF kernels require an importan...
Nystrom Method for Approximating the GMM Kernel
The GMM (generalized min-max) kernel was recently proposed [5] as a measure of data similarity and was demonstrated effective in machine learning tasks. In order to use the GMM kernel for large-scale datasets, the prior work resorted to the (generalized) consistent weighted sampling (GCWS) to convert the GMM kernel to a linear kernel. We call this approach "GMM-GCWS". In the machine learning l...
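For reference, the exact (non-approximated) GMM kernel between two general real-valued vectors is usually defined through a nonnegative re-writing of each vector. The sketch below is a plain NumPy rendering of that definition and assumes at least one nonzero entry:

```python
import numpy as np

def gmm_kernel(x, z):
    """Generalized min-max (GMM) kernel: split each vector into its positive
    and negative parts (so all entries become nonnegative), then take
    sum(min) / sum(max) over the expanded coordinates."""
    def split(v):
        v = np.asarray(v, dtype=float)
        return np.concatenate([np.maximum(v, 0.0), np.maximum(-v, 0.0)])
    xs, zs = split(x), split(z)
    return np.minimum(xs, zs).sum() / np.maximum(xs, zs).sum()
```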
Approximation Networks Based on Shape-adaptive Kernels Using Localized Threshold Decomposition
Approximation networks combined with learning algorithms are being increasingly used to represent or approximate unknown mappings of multivariate functionals. Potential functions in general, and the subclass of radial basis functions (RBF) in particular, are examples of this approach, where combinations of localized kernels are used to fit the relatively sparse available observations. The kernels describe the l...